41 research outputs found

    Probabilistic Global Scale Estimation for MonoSLAM Based on Generic Object Detection

    This paper proposes a novel method to estimate the global scale of a 3D reconstructed model within a Kalman-filtering-based monocular SLAM algorithm. Our Bayesian framework integrates height priors over the detected objects belonging to a set of broad predefined classes, building on recent advances in fast generic object detection. Each observation is produced on a single frame, so no data-association process across video frames is needed: the height priors are associated with the image region sizes at the places where map-feature projections fall within the object detection regions. We present very promising results obtained in several experiments with different object classes. (Comment: Int. Workshop on Visual Odometry, CVPR, July 2017)
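    The core idea can be sketched as follows: each detection of an object with a known class-height prior yields one noisy scale observation (prior height divided by the reconstructed model height), and these per-frame observations can be fused as a product of Gaussians, Kalman-style. This is a minimal illustrative sketch, not the paper's implementation; the function name, the variance values, and the example numbers are all assumptions.

    ```python
    # Minimal sketch: fuse per-detection scale observations, each a Gaussian
    # (mean, variance), via a product of Gaussians. One observation comes from
    # s = prior_height / reconstructed_height for a detected object. The
    # numbers below (a ~1.7 m person prior, variances) are purely illustrative.

    def fuse_scale(observations):
        """Fuse (mean, variance) scale observations into a single Gaussian."""
        inv_var = sum(1.0 / v for _, v in observations)
        mean = sum(m / v for m, v in observations) / inv_var
        return mean, 1.0 / inv_var

    # Two hypothetical detections: person priors 1.70 m and 1.75 m, with the
    # same objects measuring 0.85 and 0.90 units in the unscaled 3D model.
    obs = [(1.70 / 0.85, 0.04), (1.75 / 0.90, 0.09)]
    scale, var = fuse_scale(obs)   # fused global-scale estimate near 2.0
    ```

    Because each observation is self-contained, new detections can be folded in online without associating objects across frames, which matches the paper's single-frame observation model.
    
    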

    SocialVAE: Human Trajectory Prediction using Timewise Latents

    Predicting pedestrian movement is critical for human behavior analysis and for safe and efficient human-agent interactions. However, despite significant advances, it remains challenging for existing approaches to capture the uncertainty and multimodality of human navigation decision-making. In this paper, we propose SocialVAE, a novel approach for human trajectory prediction. The core of SocialVAE is a timewise variational autoencoder architecture that exploits stochastic recurrent neural networks to perform prediction, combined with a social attention mechanism and backward posterior approximation to allow better extraction of pedestrian navigation strategies. We show that SocialVAE improves on current state-of-the-art performance on several pedestrian trajectory prediction benchmarks, including the ETH/UCY benchmark, the Stanford Drone Dataset, and the SportVU NBA movement dataset. Code is available at: https://github.com/xupei0610/SocialVAE
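    The "timewise" aspect can be illustrated with a toy rollout: instead of drawing one latent code for a whole trajectory, a fresh latent z_t is sampled (via the reparameterization trick) at every prediction step and fed into a recurrent state update. This is a hypothetical NumPy sketch of that pattern only; the dimensions, random linear maps, and tanh update are placeholders, not SocialVAE's actual architecture.

    ```python
    import numpy as np

    # Toy "timewise latent" rollout: one latent z_t per step, reparameterized
    # from a state-dependent Gaussian, then folded into a recurrent update.
    # All weights are random placeholders; only the control flow is the point.

    rng = np.random.default_rng(0)
    D_h, D_z = 8, 2
    W_mu = rng.normal(size=(D_z, D_h)) * 0.1      # state -> latent mean
    W_logvar = rng.normal(size=(D_z, D_h)) * 0.1  # state -> latent log-variance
    W_h = rng.normal(size=(D_h, D_h + D_z)) * 0.1 # recurrent update weights

    def rollout(h, steps):
        traj = []
        for _ in range(steps):
            mu, logvar = W_mu @ h, W_logvar @ h
            z = mu + np.exp(0.5 * logvar) * rng.normal(size=D_z)  # reparam. trick
            h = np.tanh(W_h @ np.concatenate([h, z]))             # recurrent step
            traj.append(z)
        return np.stack(traj)

    traj = rollout(np.zeros(D_h), steps=12)   # shape (12, 2): one z_t per step
    ```

    Sampling a latent per timestep is what lets such models represent multimodal futures: re-running the rollout yields a different plausible trajectory each time.
    
    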

    Cooperative SLAM-based object transportation by two humanoid robots in a cluttered environment

    In this work, we tackle the problem of making two humanoid robots navigate a cluttered environment while transporting a very large object that cannot be moved by a single robot. We present a complete navigation scheme, from the incremental construction of a map of the environment and the computation of collision-free trajectories to the control that executes those trajectories. We present experiments conducted on real Nao robots, equipped with RGB-D sensors mounted on their heads, moving an object around obstacles. Our experiments show that a significantly large object can be transported without changing the robots' main hardware, thereby extending the capabilities of humanoid robots in real-life situations.

    On-Line Rectification of Sport Sequences with Moving Cameras

    Peer reviewed.

    Contribution to the navigation of a mobile robot using textured visual landmarks in a structured environment


    Robust Extrinsic Camera Calibration from Trajectories in Human-Populated Environments

    This paper proposes a novel, robust approach to inter-camera and ground-camera calibration in the context of visual monitoring of human-populated areas. Assuming that the monitored agents move on a single plane and that the cameras' intrinsic parameters are known, we use the image trajectories of moving objects, as tracked by standard trackers, within a RANSAC paradigm to estimate the extrinsic parameters of the different cameras. We illustrate the performance of our algorithm on several challenging experimental setups and compare it to existing approaches.
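    The RANSAC paradigm the abstract refers to can be shown on a deliberately simple stand-in problem: fitting a 2D line to noisy trajectory points while rejecting outliers. This is only an illustration of the hypothesize-and-verify loop, under assumed thresholds and iteration counts; the paper's actual model is the cameras' extrinsic parameters, not a line.

    ```python
    import random

    # Illustrative RANSAC loop: repeatedly fit a model to a minimal sample,
    # count inliers within a threshold, and keep the best model. Here the
    # model is a 2D line y = a*x + b standing in for the extrinsic parameters.

    def ransac_line(points, iters=200, thresh=0.1, seed=1):
        rng = random.Random(seed)
        best_model, best_inliers = None, []
        for _ in range(iters):
            (x1, y1), (x2, y2) = rng.sample(points, 2)   # minimal sample: 2 pts
            if x1 == x2:
                continue                                  # degenerate sample
            a = (y2 - y1) / (x2 - x1)
            b = y1 - a * x1
            inliers = [p for p in points
                       if abs(p[1] - (a * p[0] + b)) < thresh]
            if len(inliers) > len(best_inliers):
                best_model, best_inliers = (a, b), inliers
        return best_model, best_inliers

    # 20 points on y = 2x + 1 plus two gross outliers, as tracker noise might give.
    pts = [(x / 10, 2 * x / 10 + 1) for x in range(20)] + [(0.5, 5.0), (1.2, -3.0)]
    (a, b), inliers = ransac_line(pts)   # recovers a≈2, b≈1, 20 inliers
    ```

    The same structure scales to calibration: the minimal sample becomes a handful of trajectory correspondences, and the consensus test becomes a reprojection-error check.
    
    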

    Vision-driven walking pattern generation for humanoid reactive walking

    We present a novel approach for introducing visual information into the walking pattern generator for humanoid robots more directly than existing methods do. We make use of a model predictive control (MPC) visual servoing strategy, combined with the walking motion generator. We define two schemes based on this principle, a position-based scheme and an image-based scheme, with a Quadratic Program (QP) formulation in both cases. Finally, we present simulation results validating our approach.
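    The QP flavour of such MPC schemes can be sketched in a toy form: choose controls over a short horizon that track a reference (here standing in for a visual target) while penalizing effort. This sketch solves only the unconstrained case in closed form, with made-up dynamics and weights; real walking pattern generators add CoM dynamics, footstep constraints, and a proper QP solver.

    ```python
    import numpy as np

    # Toy MPC step:  min_u ||Phi u - ref||^2 + w ||u||^2
    # Phi maps the control sequence to predicted outputs over the horizon;
    # the unconstrained optimum solves (Phi^T Phi + w I) u = Phi^T ref.

    def mpc_step(Phi, ref, w=0.1):
        H = Phi.T @ Phi + w * np.eye(Phi.shape[1])   # QP Hessian
        g = Phi.T @ ref                              # QP linear term
        return np.linalg.solve(H, g)                 # unconstrained optimum

    Phi = np.tril(np.ones((5, 5)))    # cumulative-sum "dynamics" over 5 steps
    ref = np.linspace(0.1, 0.5, 5)    # desired positions toward a visual target
    u = mpc_step(Phi, ref)            # control sequence for the horizon
    ```

    In receding-horizon fashion, only the first control of `u` would be applied before re-solving with updated visual measurements.
    
    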